On penalised likelihood and bias reduction
Author
Abstract
We propose a likelihood function endowed with a penalisation that reduces the bias of the maximum likelihood estimator in regular parametric models. The penalisation hinges on the first two derivatives of the log likelihood and can be computed numerically. The asymptotic properties and the sensitivity to nuisance parameters of the penalised likelihood and derived quantities are addressed. In models for stratified data in a two-index asymptotic setting, the bias of the penalised profile score function is found to be equivalent to the bias of a modified profile score function.
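The abstract does not spell out the form of the penalty, so the sketch below is an illustration rather than the paper's construction: it assumes a Firth-type penalty equal to half the log-determinant of the observed information, with the information obtained by finite differences, in line with the statement that the penalisation hinges on the first two derivatives of the log likelihood and can be computed numerically. The names `penalised_loglik` and `numerical_hessian` are hypothetical helpers introduced here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def numerical_hessian(f, theta, h=1e-4):
    """Central-difference Hessian of a scalar function f at theta."""
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    hess = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei, ej = np.zeros(p), np.zeros(p)
            ei[i], ej[j] = h, h
            hess[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                          - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * h ** 2)
    return hess

def penalised_loglik(loglik, theta):
    """loglik(theta) + 0.5 * log det j(theta), with the observed information
    j(theta) obtained by finite differences (assumed Firth-type penalty)."""
    ll = loglik(theta)
    if not np.isfinite(ll):
        return -np.inf
    info = -numerical_hessian(loglik, theta)      # observed information
    sign, logdet = np.linalg.slogdet(info)
    if sign <= 0:                                 # information not positive definite
        return -np.inf
    return ll + 0.5 * logdet

# Toy example: exponential data in the rate parametrisation.  The MLE
# n / sum(x) overestimates the rate on average; the penalised maximum
# works out to (n - 1) / sum(x), which is exactly unbiased.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10)           # true rate = 0.5

def loglik(theta):
    rate = theta[0]
    if rate <= 0:
        return -np.inf
    return x.size * np.log(rate) - rate * x.sum()

mle = minimize(lambda th: -loglik(th), x0=[1.0], method="Nelder-Mead")
pen = minimize(lambda th: -penalised_loglik(loglik, th), x0=[1.0], method="Nelder-Mead")
print("MLE of rate:         %.4f" % mle.x[0])
print("penalised estimate:  %.4f" % pen.x[0])
print("(n - 1) / sum(x):    %.4f" % ((x.size - 1) / x.sum()))
```

In this toy case the penalised maximiser coincides with the classical bias-free estimator of the exponential rate, which is the kind of first-order bias reduction the abstract describes; the exact penalty and its properties are developed in the paper itself.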
Similar articles
Performance of Likelihood-Based Estimation Methods for Multilevel Binary Regression Models
By means of a fractional factorial simulation experiment, we compare the performance of Penalised Quasi-Likelihood, Non-Adaptive Gaussian Quadrature and Adaptive Gaussian Quadrature in estimating parameters for multi-level logistic regression models. The comparison is done in terms of bias, mean squared error, numerical convergence, and computational efficiency. It turns out that, in terms of M...
Variable selection for multivariate failure time data.
In this paper, we propose a penalised pseudo-partial likelihood method for variable selection with multivariate failure time data and a growing number of regression coefficients. Under certain regularity conditions, we show the consistency and asymptotic normality of the penalised likelihood estimators. We further demonstrate that, for certain penalty functions with proper choices of regulari...
Selecting Hidden Markov Model State Number with Cross-Validated Likelihood
The problem of estimating the number of hidden states in a hidden Markov model is considered. Emphasis is placed on cross-validated likelihood criteria. Using cross-validation to assess the number of hidden states makes it possible to circumvent the well-documented technical difficulties of the order identification problem in mixture models. Moreover, from a predictive perspective, it does not require that t...
Covariance selection and estimation via penalised normal likelihood
We propose a nonparametric method to identify parsimony and to produce a statistically efficient estimator of a large covariance matrix. We reparameterise a covariance matrix through the modified Cholesky decomposition of its inverse or the one-step-ahead predictive representation of the vector of responses and reduce the nonintuitive task of modelling covariance matrices to the familiar task o...
The effectiveness of threatening cognitive bias modification on the reduction of test anxiety in twelfth-grade students
Introduction: Test anxiety is one of the most common mental disorders and has a detrimental effect on students' mental health and educational performance. The aim of this study was therefore to investigate the effectiveness of threatening cognitive bias modification on the reduction of test anxiety. Methods: In this quasi-experimental study, the statistical population consisted of twelfth-grade female students in district 3 of Teh...